IS

Compeau, Deborah R.

Topic Weight    Topic Terms
0.614           usage, use, self-efficacy, social, factors, individual, findings, influence, organizations, beliefs, individuals, support, anxiety, technology, workplace
0.214           training, learning, outcomes, effectiveness, cognitive, technology-mediated, end-user, methods, environments, longitudinal, skills, performance, using, effective, method
0.204           competence, experience, versus, individual, disaster, employees, form, npd, concept, context, construct, effectively, focus, functionalities, front-end
0.179           performance, results, study, impact, research, influence, effects, data, higher, efficiency, effect, significantly, findings, impacts, empirical
0.168           validity, reliability, measure, constructs, construct, study, research, measures, used, scale, development, nomological, scales, instrument, measurement
0.149           results, study, research, experiment, experiments, influence, implications, conducted, laboratory, field, different, indicate, impact, effectiveness, future
0.139           effect, impact, affect, results, positive, effects, direct, findings, influence, important, positively, model, data, suggest, test
0.111           instrument, measurement, factor, analysis, measuring, measures, dimensions, validity, based, instruments, construct, measure, conceptualization, sample, reliability

[Co-authorship network visualization: focal researcher, coauthors of the focal researcher (1st degree), and coauthors of coauthors (2nd degree); edge labels give the number of co-authorships.]

Coauthors (number of co-authorships):
Higgins, Christopher A. (2); Huff, Sid L. (1); Marcolin, Barbara L. (1); Munro, Malcolm C. (1)

Keywords (count):
Causal models (2); Partial least squares (2); Self-efficacy (2); Competence (1); end user training (1); Empirical (1); End-User Computing (1); measurement (1); psychology (1); Software Skills (1); Theoretical Framework (1); User behavior (1)

Articles (3)

Assessing User Competence: Conceptualization and Measurement. (Information Systems Research, 2000)
Authors: Marcolin, Barbara L.; Compeau, Deborah R.; Munro, Malcolm C.; Huff, Sid L.
Abstract:
    Organizations today face great pressure to maximize the benefits from their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is critical in maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and fit models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature regarding what user competence means and the problems of assessment. As an illustrative study, we then report the findings from an experiment involving 66 participants. The experiment was conducted to compare empirically two methods (paper and pencil tests versus self-report questionnaire), across two different types of software, or domains of knowledge (word processing versus spreadsheet packages), and two different conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows statistical significance in all three main effects. How user competence is measured, what is measured, what measurement context is employed: all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement methods, conceptualization, and knowledge domains produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity in both defining what type of competence is being assessed and in drawing conclusions regarding competence, based upon the types of measures used. Since the results suggest that definition and measurement of the user competence construct can change the ability score being captured, the existing information system (IS) models of usage must contain the concept of an ability rating. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.
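
As an illustrative aside (not part of the article), the sketch below shows how main and interaction effects in a 2 x 2 x 2 design like the one described, measurement method by knowledge domain by conceptualization of competence, are typically tested with a three-way factorial ANOVA. The data are simulated and every variable name is an assumption, not the study's materials.

    # Hedged sketch: simulated data, not the study's data set.
    # Tests three main effects and their interactions in a 2x2x2 factorial design.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    rows = []
    for method in ("paper_pencil", "self_report"):
        for domain in ("word_processing", "spreadsheet"):
            for concept in ("knowledge", "self_efficacy"):
                # Arbitrary cell means, chosen only so the example produces output.
                mean = 50 + 5 * (method == "self_report") + 3 * (domain == "spreadsheet")
                for score in rng.normal(mean, 10, size=8):
                    rows.append({"method": method, "domain": domain,
                                 "concept": concept, "score": score})
    data = pd.DataFrame(rows)

    # Full factorial model: main effects of method, domain, and conceptualization,
    # plus all two- and three-way interactions.
    model = smf.ols("score ~ C(method) * C(domain) * C(concept)", data=data).fit()
    print(sm.stats.anova_lm(model, typ=2))
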
Application of Social Cognitive Theory to Training for Computer Skills. (Information Systems Research, 1995)
Authors: Compeau, Deborah R.; Higgins, Christopher A.
Abstract:
    While computer training is widely recognized as an essential contributor to the productive use of computers in organizations, very little research has focused on identifying the processes through which training operates, and the relative effectiveness of different methods for such training. This research examined the training process, and compared a behavior modeling training program, based on Social Cognitive Theory (Bandura 1977, 1978, 1982, 1986), to a more traditional, lecture-based program. According to Social Cognitive Theory, watching others performing a behavior, in this case interacting with a computer system, influences the observers' perceptions of their own ability to perform the behavior, or self-efficacy, and the expected outcomes that they perceive, as well as providing strategies for effective performance. The findings provide only partial support for the research model. Self-efficacy exerted a strong influence on performance in both models. In addition, behavior modeling was found to be more effective than the traditional method for training in Lotus 1-2-3, resulting in higher self-efficacy and higher performance. For WordPerfect, however, modeling did not significantly influence performance. This finding was unexpected, and several possible explanations are explored in the discussion. Of particular surprise were the negative relationships found between outcome expectations and performance. Outcome expectations were expected to positively influence performance, but the results indicated a strong negative effect. Measurement limitations are presented as the most plausible explanation for this result, but further research is necessary to provide conclusive explanations.
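
The keyword list above indicates the causal model was estimated with partial least squares. As a rough, hedged illustration only, the snippet below approximates two of the hypothesized paths with ordinary least squares rather than PLS; the file name and column names (modeling, self_efficacy, outcome_expect, performance) are assumptions, not the study's variables.

    # Hedged sketch: OLS stand-in for the paths described, not the paper's PLS analysis.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("training_study.csv")  # hypothetical data file

    # Path 1: does behavior-modeling training (vs. lecture) raise post-training self-efficacy?
    path1 = smf.ols("self_efficacy ~ C(modeling)", data=df).fit()

    # Path 2: do self-efficacy and outcome expectations predict task performance?
    path2 = smf.ols("performance ~ self_efficacy + outcome_expect", data=df).fit()

    print(path1.summary())
    print(path2.summary())
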
Computer Self-Efficacy: Development of a Measure and Initial Test. (MIS Quarterly, 1995)
Authors: Compeau, Deborah R.; Higgins, Christopher A.
Abstract:
    This paper discusses the role of individuals' beliefs about their abilities to competently use computers (computer self-efficacy) in the determination of computer use. A survey of Canadian managers and professionals was conducted to develop and validate a measure of computer self-efficacy and to assess both its impacts and antecedents. Computer self-efficacy was found to exert a significant influence on individuals' expectations of the outcomes of using computers, their emotional reactions to computers (affect and anxiety), as well as their actual computer use. An individual's self-efficacy and outcome expectations were found to be positively influenced by the encouragement of others in their work group, as well as others' use of computers. Thus, self-efficacy represents an important individual trait, which moderates organizational influences (such as encouragement and support) on an individual's decision to use computers. Understanding self-efficacy, then, is important to the successful implementation of systems in organizations. The existence of a reliable and valid measure of self-efficacy makes assessment possible and should have implications for organizational support, training, and implementation.
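
As a minimal sketch of the reliability assessment such a measure makes possible (not drawn from the paper; the file name and item columns cse_1 through cse_10 are hypothetical), internal consistency of a multi-item self-efficacy scale can be checked with Cronbach's alpha:

    # Hedged sketch: Cronbach's alpha for a multi-item scale; all names are placeholders.
    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """items: one column per scale item, one row per respondent."""
        k = items.shape[1]
        item_variances = items.var(axis=0, ddof=1)
        total_variance = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

    responses = pd.read_csv("cse_survey.csv")                  # hypothetical survey file
    items = responses[[f"cse_{i}" for i in range(1, 11)]]      # hypothetical item columns
    print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
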